What Would Happen If AI Were To Gain Control Over Humans In The Future?

#artificialintelligence

Artificial intelligence (AI) is a rapidly developing field that has the potential to transform many aspects of our lives. One question that has been on the minds of many people is what would happen if AI were to gain control over humans in the future. While it is difficult to predict the exact trajectory of AI development, it is important to consider the potential risks and benefits of such a scenario. One possibility is that AI could become so advanced that it is able to surpass human intelligence and take control of our society. In this scenario, AI could potentially make decisions that are beyond our understanding or control. This could lead to a society where humans are no longer in charge of their own destiny, and where AI is able to manipulate and control us in ways that we cannot fully comprehend.


cGAN: Conditional Generative Adversarial Network -- How to Gain Control Over GAN Outputs

#artificialintelligence

Have you experimented with Generative Adversarial Networks (GANs) yet? If so, you may have encountered a situation where you wanted your GAN to generate a specific type of data but did not have sufficient control over the GAN's outputs. For example, assume you used a broad spectrum of flower images to train a GAN capable of producing fake pictures of flowers. While you can use your model to generate an image of a random flower, you cannot instruct it to create an image of, say, a tulip or a sunflower. A conditional GAN (cGAN) allows us to condition the network on additional information such as class labels.
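As a toy illustration of the conditioning idea (not code from the article), the class label can be one-hot encoded and concatenated with the noise vector, so the generator's output depends on the requested class. The class count, noise dimension, and single linear layer below are all hypothetical stand-ins for a real deep generator:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 10   # hypothetical: e.g. ten flower species
NOISE_DIM = 100

def condition_input(z, label):
    """Concatenate a noise vector with a one-hot class label.

    This is the core of cGAN: both generator and discriminator receive
    the label as extra input, so at sampling time we can request a
    specific class ("a tulip") by fixing the label while varying z."""
    one_hot = np.zeros(NUM_CLASSES)
    one_hot[label] = 1.0
    return np.concatenate([z, one_hot])

# Toy generator: a single random linear layer standing in for a deep net.
W = rng.normal(size=(NOISE_DIM + NUM_CLASSES, 28 * 28))

def generate(label):
    z = rng.normal(size=NOISE_DIM)
    x = condition_input(z, label)          # shape (110,)
    return np.tanh(x @ W).reshape(28, 28)  # fake 28x28 image

img = generate(label=3)  # always conditioned on class 3, whatever z is
```

In a real cGAN the discriminator is conditioned the same way, so it learns to reject samples whose content does not match the supplied label.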


The principles of adaptation in organisms and machines II: Thermodynamics of the Bayesian brain

Shimazaki, Hideaki

arXiv.org Machine Learning

This article reviews how organisms learn and recognize the world through the dynamics of neural networks from the perspective of Bayesian inference, and introduces a view on how such dynamics are described by the laws for the entropy of neural activity, a paradigm that we call thermodynamics of the Bayesian brain. The Bayesian brain hypothesis sees the stimulus-evoked activity of neurons as an act of constructing the Bayesian posterior distribution based on the generative model of the external world that an organism possesses. A closer look at the stimulus-evoked activity at early sensory cortices reveals that feedforward connections initially mediate the stimulus-response, which is later modulated by input from recurrent connections. Importantly, it is not the initial response but the delayed modulation that expresses animals' cognitive states such as awareness and attention regarding the stimulus. Using a simple generative model made of a spiking neural population, we reproduce the stimulus-evoked dynamics with the delayed feedback modulation as the process of the Bayesian inference that integrates the stimulus evidence and prior knowledge with a time delay. We then introduce a thermodynamic view on this process based on the laws for the entropy of neural activity. This view elucidates that the process of the Bayesian inference works as the recently proposed information-theoretic engine (neural engine, an analogue of a heat engine in thermodynamics), which allows us to quantify the perceptual capacity expressed in the delayed modulation in terms of entropy.
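The feedforward-then-delayed-modulation picture can be caricatured as a two-step Bayesian update (a toy sketch, not the paper's spiking population model; the two states and all probabilities below are hypothetical numbers):

```python
import numpy as np

# Toy two-state world ("stimulus present" vs "absent").
prior = np.array([0.7, 0.3])        # the organism's generative-model prior
likelihood = np.array([0.8, 0.3])   # P(evoked activity | state)

# Early feedforward sweep: driven by the stimulus evidence alone.
feedforward = likelihood / likelihood.sum()

# Delayed recurrent modulation: evidence combined with the prior,
# i.e. the Bayesian posterior P(state | activity).
posterior = prior * likelihood
posterior = posterior / posterior.sum()
```

Here the delayed step shifts the estimate toward the prior, mirroring the idea that the late modulation, not the initial response, carries the integrated (posterior) belief.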


A Note On $k$-Means Probabilistic Poverty

Kłopotek, Mieczysław A.

arXiv.org Artificial Intelligence

Kleinberg [2] coined the term k-richness of distance-based clustering algorithms, meaning the possibility to partition a set of objects into any k nonempty (disjoint) subsets by modifying the distances between these objects. However, there exist non-deterministic, probabilistic algorithms which do not fit this characterization because of their non-deterministic behaviour. Therefore Ackerman et al. [1, Definition 3 (k-Richness)] introduced the concept of probabilistic k-richness. They defined this kind of richness as Property 1: for any partition Γ of the set X consisting of exactly k clusters and every ε > 0, there exists a distance function d such that the clustering function returns this partition Γ with probability exceeding 1 − ε. They postulate in their Fig. 2 (omitting the proof) that probabilistic k-richness is possessed by a version of the k-means
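A small sketch of the richness idea (my illustration, not from the note): for any target partition we can embed the objects so that within-cluster distances are tiny and between-cluster distances are huge, and k-means then returns exactly that partition. Randomized initializations are what make the guarantee only probabilistic, motivating the 1 − ε formulation; the seeding below is deterministic for reproducibility.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal Lloyd's k-means with deterministic farthest-point seeding."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min(np.linalg.norm(X[:, None] - np.array(centers)[None], axis=2), axis=1)
        centers.append(X[int(d.argmax())])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# An arbitrary target partition of 6 objects into k = 3 clusters.
target = np.array([0, 0, 1, 1, 2, 2])

# "Modify the distances": place each object near a location determined by
# its target cluster (separation 100, within-cluster spread 0.01).
X = np.array([[100.0 * target[i] + 0.01 * i] for i in range(6)])

labels = kmeans(X, k=3)  # recovers the target partition up to relabeling
```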


Another Day, Another Facebook Problem

The Atlantic - Technology

More bad news: Facebook has announced that a security exploit allowed attackers to gain control of at least 50 million user accounts. According to the company, the exploit impacted a feature that lets users see what their profile looks like to another user. In this case, the breach doesn't appear to involve extracting data from servers. Instead, the defect--introduced by a change to the way videos get uploaded--allowed users to gain control of a user's account directly, without a password. Facebook says they have fixed the vulnerability and taken steps to protect other users who could have been impacted.


Nonlinear Processing in LGN Neurons

Bonin, Vincent, Mante, Valerio, Carandini, Matteo

Neural Information Processing Systems

According to a widely held view, neurons in the lateral geniculate nucleus (LGN) operate on visual stimuli in a linear fashion. There is ample evidence, however, that LGN responses are not entirely linear. To account for nonlinearities we propose a model that synthesizes more than 30 years of research in the field. Model neurons have a linear receptive field, and a nonlinear, divisive suppressive field. The suppressive field computes local root-mean-square contrast. To test this model we recorded responses from the LGN of anesthetized, paralyzed cats. We estimate model parameters from a basic set of measurements and show that the model can accurately predict responses to novel stimuli. The model might serve as the new standard model of LGN responses. It specifies how visual processing in LGN involves both linear filtering and divisive gain control.
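The model's structure can be sketched in one dimension (a hedged toy version; the filters, pooling window, and semi-saturation constant are hypothetical, not the fitted parameters of the paper): a linear receptive-field output is divided by the local root-mean-square contrast computed by the suppressive field.

```python
import numpy as np

def lgn_response(stimulus, c50=0.5):
    """Toy 1-D divisive-gain-control model: linear drive divided by
    the local RMS contrast pooled by a suppressive field."""
    rf = np.array([-0.5, 1.0, -0.5])     # linear receptive field
    pool = np.ones(9) / 9                # suppressive pooling window
    linear_drive = np.convolve(stimulus, rf, mode="same")
    local_rms = np.sqrt(np.convolve(stimulus ** 2, pool, mode="same"))
    return linear_drive / (c50 + local_rms)  # divisive gain control

rng = np.random.default_rng(0)
s = rng.normal(size=64)
r1 = lgn_response(s)
r2 = lgn_response(2 * s)  # doubling contrast less than doubles the response
```

The division is what produces contrast saturation: as stimulus contrast grows, the denominator grows with it, so responses scale sublinearly.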


Characterizing Neural Gain Control using Spike-triggered Covariance

Schwartz, Odelia, Chichilnisky, E.J., Simoncelli, Eero P.

Neural Information Processing Systems

Spike-triggered averaging techniques are effective for linear characterization of neural responses. But neurons exhibit important nonlinear behaviors, such as gain control, that are not captured by such analyses. We describe a spike-triggered covariance method for retrieving suppressive components of the gain control signal in a neuron. We demonstrate the method in simulation and on retinal ganglion cell data. Analysis of physiological data reveals significant suppressive axes and explains neural nonlinearities. This method should be applicable to other sensory areas and modalities.
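A simulated sketch of the idea (my illustration, with a hypothetical model neuron, not the retinal data of the paper): the neuron is excited along one stimulus axis and suppressed by energy along another. The spike-triggered average recovers the excitatory axis, while the spike-triggered covariance reveals the suppressive axis as a direction of reduced variance in the spike-triggered ensemble.

```python
import numpy as np

rng = np.random.default_rng(1)

# White-noise stimuli and a toy LNP-style neuron: drive along axis 0,
# squared (energy) suppression along axis 1.
dim, n = 8, 50_000
stimuli = rng.normal(size=(n, dim))
drive_axis, suppress_axis = np.eye(dim)[0], np.eye(dim)[1]

drive = stimuli @ drive_axis
suppression = (stimuli @ suppress_axis) ** 2
p_spike = 1 / (1 + np.exp(-(drive - suppression)))
spikes = rng.random(n) < p_spike

spk = stimuli[spikes]

# Spike-triggered average: recovers the linear (excitatory) axis.
sta = spk.mean(axis=0)

# Spike-triggered covariance: axes whose eigenvalues fall below the raw
# stimulus variance (1 here) are suppressive, gain-control directions.
stc = np.cov(spk.T)
eigvals, eigvecs = np.linalg.eigh(stc)   # ascending eigenvalues
suppressive_est = eigvecs[:, 0]          # smallest-variance axis
```

Because spikes preferentially occur when the suppressive projection is small, variance along that axis shrinks in the spike-triggered ensemble, which is exactly what the eigen-decomposition detects.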